Trust and Distrust
In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions
Pessianzadeh, Aria, Sultana, Naima, Bulck, Hildegarde Van den, Gefen, David, Jabari, Shahin, Rezapour, Rezvaneh
The rise of generative AI (GenAI) has impacted many aspects of human life. As these systems become embedded in everyday practices, understanding public trust in them also becomes essential for responsible adoption and governance. Prior work on trust in AI has largely drawn from psychology and human-computer interaction, but there is a lack of computational, large-scale, and longitudinal approaches to measuring trust and distrust in GenAI and large language models (LLMs). This paper presents the first computational study of Trust and Distrust in GenAI, using a multi-year Reddit dataset (2022--2025) spanning 39 subreddits and 197,618 posts. Crowd-sourced annotations of a representative sample were combined with classification models to scale analysis. We find that Trust and Distrust are nearly balanced over time, with shifts around major model releases. Technical performance and usability dominate as dimensions, while personal experience is the most frequent reason shaping attitudes. Distinct patterns also emerge across trustors (e.g., experts, ethicists, general users). Our results provide a methodological framework for large-scale Trust analysis and insights into evolving public perceptions of GenAI.
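The scaling step described above — training a classifier on a crowd-annotated sample and applying it to the full corpus — can be sketched in miniature. This is an illustrative sketch only, not the authors' actual pipeline: the model choice (TF-IDF plus logistic regression) and the tiny example posts are assumptions for demonstration.

```python
# Minimal sketch (not the paper's pipeline): scale a small set of
# crowd-sourced trust/distrust labels to unlabeled posts with a classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated sample (labels: "trust" / "distrust").
annotated_posts = [
    ("It handled my code review flawlessly, I rely on it daily", "trust"),
    ("The model hallucinated citations again, nothing it says is reliable", "distrust"),
    ("It answered my tax question correctly and saved me hours", "trust"),
    ("It confidently made up a legal case that does not exist", "distrust"),
]
texts, labels = zip(*annotated_posts)

# Train on the annotated sample, then label the remaining corpus.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

unlabeled = ["It fabricated an answer and invented sources"]
pred = clf.predict(unlabeled)
print(pred[0])
```

With a real annotated sample of adequate size, the same `fit`/`predict` pattern extends label coverage from thousands of annotations to hundreds of thousands of posts.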
Healthy Distrust in AI systems
Paaßen, Benjamin, Alpsancar, Suzana, Matzner, Tobias, Scharlau, Ingrid
Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and, thus, enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone -- nor should they be, if the AI system is embedded in a social context that gives good reason to believe it is used in tension with that person's interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term "healthy distrust" to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize healthy distrust as a crucial part of AI usage that respects human autonomy.
Distrust in (X)AI -- Measurement Artifact or Distinct Construct?
Scharowski, Nicolas, Perrig, Sebastian A. C.
Trust is a key motivation in developing explainable artificial intelligence (XAI). However, researchers attempting to measure trust in AI face numerous challenges, such as differing trust conceptualizations, simplified experimental tasks that may not induce uncertainty as a prerequisite for trust, and the lack of validated trust questionnaires in the context of AI. While acknowledging these issues, we have identified a further challenge that currently seems underappreciated: the potential distinction between trust as one construct and distrust as a second construct independent of trust. While there has been long-standing academic discourse on this distinction, with arguments for both one-dimensional and two-dimensional conceptualizations of trust, distrust remains relatively understudied in XAI. In this position paper, we not only highlight the theoretical arguments for distrust as a construct distinct from trust but also contextualize psychometric evidence that likewise favors such a distinction. It remains to be investigated whether the available psychometric evidence is sufficient to establish distrust as a distinct construct or whether distrust is merely a measurement artifact. Nevertheless, the XAI community should remain receptive to considering both trust and distrust for a more comprehensive understanding of these two relevant constructs in XAI.
The Semantic Interpretation of Trust in Multiagent Interactions
Kalia, Anup Kumar (North Carolina State University)
We provide an approach to estimate trust between agents from their interactions. Our approach takes a probabilistic model of trust founded on commitments. We assume commitments to estimate trust because a commitment describes what an agent may expect of another. Therefore, the satisfaction or violation of a commitment provides a natural basis for determining how much to trust another agent. We evaluate our approach empirically. In one study, 30 subjects read emails extracted from the Enron dataset augmented with some synthetic emails to capture commitment operations missing in the Enron corpus. The subjects estimated trust between each pair of communicating participants. We trained model parameters for each subject with respect to our automated analysis of the emails, showing that our trained parameters yield a lower prediction error of a subject's trust rating given automatically inferred commitments than fixed parameters.
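The core intuition above — that satisfied commitments raise trust and violated ones lower it — can be illustrated with a simple Beta-Bernoulli estimate. This is an illustrative sketch only, not Kalia's actual model or trained parameters: the prior values and the `trust_estimate` function are assumptions for demonstration.

```python
# Illustrative sketch (not the paper's model): a Beta-Bernoulli estimate
# of trust from commitment outcomes. Each satisfied commitment counts as
# a positive observation, each violation as a negative one.
def trust_estimate(satisfied: int, violated: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of trust under a Beta(prior_a, prior_b) prior."""
    return (satisfied + prior_a) / (satisfied + violated + prior_a + prior_b)

# An agent that satisfied 8 commitments and violated 2 earns more trust
# than one that satisfied 2 and violated 8.
high = trust_estimate(8, 2)  # (8+1)/(8+2+2) = 0.75
low = trust_estimate(2, 8)   # (2+1)/(2+8+2) = 0.25
print(high, low)
```

Tuning `prior_a` and `prior_b` per subject would mirror, loosely, the paper's idea of fitting model parameters to individual raters' trust judgments.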